Calculate new "marks" field for question attempts in the database #731
base: main
Conversation
Since an LLM-marked question may have multiple marks per question, rather than a binary correct/incorrect outcome, the attempt needs a dedicated marks value.
We previously allowed extracting all attempts for a page, but not for an individual question part. This adds that functionality.
In the context of the markbook/assignment progress, LLMFreeTextQuestionValidationResponses have a marksAwarded field that can only be read by extracting the full question attempt. For all other question parts we want to keep attempts lightweight, to avoid processing unnecessary data, so this change checks each question part and extracts the full response only for LLMFreeText ones.
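The per-part decision described above could be sketched as follows. This is a minimal illustration, not the actual Isaac code: the class, record, and method names (and the question type string) are assumed for the example.

```java
// Sketch of the selective-extraction idea: only LLM free-text question parts
// are deserialised in full, so their marksAwarded field is available; every
// other part stays as a lightweight correct/incorrect record.
// All names here are illustrative, not the real Isaac classes.
import java.util.Map;

public class AttemptExtractionSketch {

    interface LightweightResponse {
        boolean isCorrect();
    }

    // Lightweight view: only the correctness flag is read from the attempt.
    record BasicResponse(boolean correct) implements LightweightResponse {
        @Override public boolean isCorrect() { return correct; }
    }

    // Full view: additionally carries the marks awarded for the part.
    record FullLLMResponse(boolean correct, int marksAwarded) implements LightweightResponse {
        @Override public boolean isCorrect() { return correct; }
    }

    // Choose, per question part, how much of the stored attempt to extract.
    static LightweightResponse extract(String questionType, Map<String, Object> attempt) {
        boolean correct = Boolean.TRUE.equals(attempt.get("correct"));
        if ("isaacLLMFreeTextQuestion".equals(questionType)) {
            // Only here do we pay the cost of reading the full response.
            int marks = (int) attempt.getOrDefault("marksAwarded", 0);
            return new FullLLMResponse(correct, marks);
        }
        return new BasicResponse(correct);
    }
}
```

The benefit is that the common path never touches fields beyond the correctness flag, keeping bulk extraction for a whole page cheap.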
Codecov Report
❌ Patch coverage is

@@           Coverage Diff            @@
##             main     #731    +/-   ##
=========================================
- Coverage   37.29%   37.22%    -0.08%
=========================================
  Files         536      536
  Lines       23709    23763      +54
  Branches     2861     2866       +5
=========================================
+ Hits         8843     8845       +2
- Misses      13984    14033      +49
- Partials      882      885       +3
jsharkey13 left a comment:
Some of these are such minor things; if you don't care about grouping marks and correct in the string and database representations, feel free to dismiss all of those comments!
Files with review comments:
- src/main/java/uk/ac/cam/cl/dtg/isaac/dos/LLMFreeTextQuestionValidationResponse.java (outdated)
- src/main/java/uk/ac/cam/cl/dtg/isaac/dos/LightweightQuestionValidationResponse.java (outdated)
- src/main/java/uk/ac/cam/cl/dtg/isaac/dos/QuestionValidationResponse.java (outdated)
- src/main/java/uk/ac/cam/cl/dtg/isaac/dto/LLMFreeTextQuestionValidationResponseDTO.java
- src/main/java/uk/ac/cam/cl/dtg/isaac/dto/QuestionValidationResponseDTO.java
- src/main/resources/db_scripts/migrations/2025-10-questions-attempts-add-mark-field.sql (outdated)
- src/main/resources/db_scripts/postgres-rutherford-create-script.sql (outdated, two comments)
- src/test/java/uk/ac/cam/cl/dtg/isaac/api/managers/UserAttemptManagerTest.java (outdated)
This will not happen for the database itself in the first instance, since columns can only be added to the end; however, whenever it is flushed and rebuilt, the create script will enforce the desired ordering.
We will be moving away from using the field anyway, so it's probably best to keep it consistent now and update the calculation more universally
In the subsequent week's release, we will use another database migration script to update the columns.
This is preparatory work for allowing the marks awarded for LLMFreeTextQuestions (and potentially others in the future, like Parsons and those using the Python Code Editor) to be displayed outside the question page itself, such as in the markbook. This PR by itself should have no visible effect on either site.

For most questions, marks are calculated as 1 if the answer is correct and 0 if it is incorrect. For LLMFreeTextQuestions, marks are derived from the marksAwarded field of the question response. There is currently no straightforward, meaningful way to do this for question types like Parsons without reference to the answer scheme, or for Inline questions, since they use multiple responses.

I've tested backwards compatibility and all seems fine, since we are not touching the question_attempt JSON object itself. The plan is to eventually phase out use of the correct column entirely, but for now both exist simultaneously.

Edit: This also sets the correctness criterion for LLMFreeTextQuestions to full marks, rather than > 0 marks. For now this may lead to a temporary discrepancy for users on the old API, but nothing breaking. For the future, we should consider how to deal with this more broadly: should we also add a maxMarks column to the database, or are we okay extracting it from the question part whenever relevant, since it should be a static value?

One further note: the migration script copies the splitting dates from this older script, but I haven't updated the dates beyond what it includes yet.
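The marking rules described in this PR could be summarised in a small sketch. The class and method names below are assumed for illustration, not taken from the codebase:

```java
// Sketch of the new "marks" derivation and the revised correctness criterion.
// Binary questions yield 1/0 from the correct flag; LLM free-text questions
// take their marksAwarded value, and are only "correct" at full marks.
// Names are illustrative, not the actual Isaac API.
public class MarksSketch {

    // Derive the marks column value for a single question attempt.
    static int marksFor(boolean isLLMFreeText, boolean correct, Integer marksAwarded) {
        if (isLLMFreeText) {
            // LLM free-text: use the graded mark count (0 if absent).
            return marksAwarded == null ? 0 : marksAwarded;
        }
        // All other question types: 1 mark if correct, 0 otherwise.
        return correct ? 1 : 0;
    }

    // Revised criterion for LLM free-text questions: full marks required,
    // rather than any marks > 0. maxMarks is assumed to come from the
    // question part itself, since there is no such column yet.
    static boolean isCorrectLLM(int marksAwarded, int maxMarks) {
        return marksAwarded == maxMarks;
    }
}
```

Under these rules a partially-marked LLM attempt (e.g. 2 of 3 marks) contributes its marks to the markbook but no longer counts as correct, which is the source of the temporary discrepancy with the old API mentioned above.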